The Linear Nonconvex Generalized Gradient

Author

  • Jay S. Treiman
Abstract

A new nonconvex generalized gradient is defined and some of its calculus is developed. This generalized gradient is smaller than that of Mordukhovich but still has a good calculus. This calculus includes a rule for the linear generalized gradient of positive multiples of functions, a sum rule, and a chain rule.
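The abstract does not spell these rules out; for a generic generalized gradient \partial, calculus rules of this kind are conventionally written in the following form (a sketch of the standard pattern, valid only under suitable regularity hypotheses, not a quotation of the paper's theorems):

  \partial(\lambda f)(x) = \lambda\,\partial f(x), \quad \lambda > 0            (positive multiples)
  \partial(f + g)(x) \subseteq \partial f(x) + \partial g(x)                    (sum rule)
  \partial(g \circ F)(x) \subseteq \nabla F(x)^{*}\,\partial g(F(x))            (chain rule, smooth F)

Here \nabla F(x)^{*} denotes the adjoint of the Jacobian of the inner map; a nonsmooth inner map requires a coderivative formulation.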


Similar resources

Lagrange Multipliers for Nonconvex Generalized Gradients with Equality, Inequality and Set Constraints

A Lagrange multiplier rule for finite-dimensional Lipschitz problems is proven that uses a nonconvex generalized gradient. This result uses either both the linear generalized gradient and the generalized gradient of Mordukhovich, or the linear generalized gradient and a qualification condition involving the pseudo-Lipschitz behavior of the feasible set under perturbations. The optimization problem ...
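For orientation, nonsmooth multiplier rules of this type usually take a Fritz John form (a generic sketch under unspecified constraint qualifications, not the paper's exact theorem): if x^* locally minimizes f over \{x \in C : g_i(x) \le 0,\ h_j(x) = 0\}, then there exist multipliers \lambda_0, \lambda_i \ge 0 and \mu_j \in \mathbb{R}, not all zero, such that

  0 \in \lambda_0\,\partial f(x^*) + \sum_i \lambda_i\,\partial g_i(x^*) + \sum_j \partial(\mu_j h_j)(x^*) + N_C(x^*),
  \lambda_i\, g_i(x^*) = 0 \quad \text{for all } i,

where N_C(x^*) is a normal cone to the set constraint; the equality terms are written as \partial(\mu_j h_j) because nonconvex generalized gradients need not be symmetric under sign changes. A qualification condition, such as the pseudo-Lipschitz behavior mentioned above, is what permits taking \lambda_0 = 1.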


Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima

We establish theoretical results concerning local optima of regularized M-estimators, where both the loss and the penalty function are allowed to be nonconvex. Our results show that as long as the loss satisfies restricted strong convexity and the penalty satisfies suitable regularity conditions, any local optimum of the composite objective lies within statistical precision of the true parameter vecto...
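To indicate the shape of such guarantees: writing the composite objective as L_n(\theta) + \rho_\lambda(\theta) with an \alpha-restricted-strongly-convex loss, a k-sparse target \theta^* in dimension p, and n samples, results in this vein typically bound every stationary point \tilde{\theta} by

  \|\tilde{\theta} - \theta^*\|_2 \le \frac{c\,\lambda\sqrt{k}}{\alpha} = O\!\left(\sqrt{\frac{k \log p}{n}}\right) \quad \text{for } \lambda \asymp \sqrt{\frac{\log p}{n}},

i.e. every local optimum already sits at the statistical precision of the problem (a generic sketch of the rate, with constants and side conditions suppressed, not the paper's exact bound).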


The Linear Nonconvex Generalized Gradient and Lagrange Multipliers

A Lagrange multiplier rule that uses small generalized gradients is introduced. It includes both inequality and set constraints. The generalized gradient is the linear generalized gradient. It is smaller than the generalized gradients of Clarke and Mordukhovich but retains much of their nice calculus. Its convex hull is the generalized gradient of Michel and Penot if a function is Lipschitz. Th...


Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization

In many modern machine learning applications, the structure of the underlying mathematical models often yields nonconvex optimization problems. Due to the intractability of nonconvexity, there is a rising need for efficient methods that solve general nonconvex problems with certain performance guarantees. In this work, we investigate the accelerated proximal gradient method for nonconvex program...
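As a concrete reference point, here is a minimal sketch of the generic accelerated (momentum) proximal gradient template for a composite objective f(x) + \lambda\|x\|_1, with soft-thresholding as the proximal step. The fixed step size eta and the Nesterov-style momentum schedule are illustrative assumptions; this is the textbook template, not the specific variant or convergence analysis investigated in the paper.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrink each coordinate toward 0 by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def accelerated_prox_grad(grad_f, x0, eta, lam, iters=500):
    # Minimize f(x) + lam * ||x||_1, where grad_f returns the gradient of the
    # smooth (possibly nonconvex) part f. A fixed step size eta, e.g. 1/L for
    # an L-smooth f, is assumed; practical schemes often add a line search.
    x_prev = x = x0.copy()
    for k in range(1, iters + 1):
        beta = (k - 1.0) / (k + 2.0)                        # momentum weight
        y = x + beta * (x - x_prev)                         # extrapolation point
        x_prev = x
        x = soft_threshold(y - eta * grad_f(y), eta * lam)  # prox-gradient step
    return x

For a least-squares loss, for instance, one would pass grad_f = lambda x: A.T @ (A @ x - b) and eta = 1.0 / np.linalg.norm(A, 2) ** 2.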



Journal:

Volume   Issue

Pages   -

Publication date: 1996